
    Adaptively Lossy Image Compression for Onboard Processing

    More efficient image-compression codecs are an emerging requirement for spacecraft because increasingly complex, onboard image sensors can rapidly saturate the downlink bandwidth of communication transceivers. While these codecs reduce transmitted data volume, many are compute-intensive and require rapid processing to sustain sensor data rates. Emerging next-generation small satellite (SmallSat) computers provide compelling computational capability to enable more onboard processing and compression than previously considered. For this research, we apply two compression algorithms for deployment on modern flight hardware: (1) end-to-end, neural-network-based image compression (CNN-JPEG); and (2) adaptive image compression through feature-point detection (FPD-JPEG). These algorithms rely on intelligent data-processing pipelines that adapt to sensor data to compress it more effectively, ensuring efficient use of limited downlink bandwidth. The first algorithm, CNN-JPEG, employs a hybrid approach adapted from the literature that combines convolutional neural networks (CNNs) and JPEG; however, we modify and tune the training scheme for satellite imagery to account for observed training instabilities. This hybrid CNN-JPEG approach achieves 23.5% better average peak signal-to-noise ratio (PSNR) and 33.5% better average structural similarity index (SSIM) than standard JPEG on a dataset collected on the Space Test Program – Houston 5 (STP-H5-CSP) mission onboard the International Space Station (ISS). For our second algorithm, we developed a novel adaptive image-compression pipeline based upon JPEG that leverages the Oriented FAST and Rotated BRIEF (ORB) feature-point detection algorithm to adaptively tune the compression ratio, trading off PSNR/SSIM against combined file size over a batch of STP-H5-CSP images. We achieve a less than 1% drop in average PSNR and SSIM while reducing the combined file size by 29.6% compared to JPEG with a static quality factor (QF) of 90.
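    To make the FPD-JPEG idea concrete, the sketch below maps ORB feature density to a JPEG quality factor: feature-rich frames keep a high QF, while feature-sparse frames are compressed harder. This is a minimal illustration assuming OpenCV; the threshold and the two-level QF mapping are placeholders, not the tuning used in the paper.

```python
# Hypothetical ORB-driven adaptive JPEG encoder (illustrative only).
import cv2

def adaptive_jpeg(image_bgr, qf_high=90, qf_low=50, feature_threshold=500):
    """Choose a JPEG quality factor from ORB feature density."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray, None)
    # Detail-rich images get the high QF; sparse ones compress harder.
    qf = qf_high if len(keypoints) >= feature_threshold else qf_low
    ok, encoded = cv2.imencode(".jpg", image_bgr,
                               [int(cv2.IMWRITE_JPEG_QUALITY), qf])
    return qf, encoded  # encoded holds the compressed byte stream
```

    Over a batch of images, the frames that fall below the feature threshold are where the combined file-size savings would accrue.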

    NASA SpaceCube Edge TPU SmallSat Card for Autonomous Operations and Onboard Science-Data Analysis

    Using state-of-the-art artificial intelligence (AI) frameworks onboard spacecraft is challenging because common spacecraft processors cannot provide performance comparable to the server-grade CPUs and GPUs available in data centers for terrestrial applications and advanced deep-learning networks. This limitation makes small, low-power AI microchip architectures, such as the Google Coral Edge Tensor Processing Unit (TPU), attractive for space missions, where the application-specific design enables both high-performance and power-efficient computing for AI applications. To address these challenging considerations for space deployment, this research introduces the design and capabilities of a CubeSat-sized Edge TPU-based co-processor card, known as the SpaceCube Low-power Edge Artificial Intelligence Resilient Node (SC-LEARN). This design conforms to NASA’s CubeSat Card Specification (CS2) for integration into next-generation SmallSat and CubeSat systems. This paper describes the overarching architecture and design of the SC-LEARN, as well as the supporting test card designed for rapid prototyping and evaluation. The SC-LEARN was developed with three operational modes: (1) a high-performance parallel-processing mode, (2) a fault-tolerant mode for onboard resilience, and (3) a power-saving mode with cold spares. Importantly, this research also elaborates on both training and quantization of TensorFlow models for the SC-LEARN for onboard use with representative, open-source datasets. Lastly, we describe future research plans, including radiation-beam testing and flight demonstration.
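    The training-and-quantization workflow the abstract refers to follows the standard TensorFlow Lite flow for Edge TPU deployment. A minimal sketch, assuming a SavedModel directory and a collection of representative images (both names are placeholders):

```python
# Full-integer post-training quantization for the Edge TPU
# (standard TensorFlow Lite flow; model and data are placeholders).
import tensorflow as tf

def quantize_for_edge_tpu(saved_model_dir, sample_images):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        for img in sample_images:  # a few hundred samples is typical
            yield [img[None, ...].astype("float32")]

    converter.representative_dataset = representative_dataset
    # The Edge TPU executes only full-integer ops, including I/O tensors.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

    The resulting .tflite model is then compiled offline with the edgetpu_compiler tool before being loaded onto the card.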

    CASPR: Autonomous Sensor Processing Experiment for STP-H7

    As computing technologies improve, spacecraft sensors continue to increase in fidelity and resolution, with dataset sizes and data rates increasing concurrently. This increase in data saturates the capabilities of spacecraft-to-ground communications and necessitates the use of powerful onboard computers to process data as it is collected. The pursuit of onboard, autonomous sensor processing that remains within the power and memory restrictions of embedded computing becomes vital to prevent the saturation of data-downlink capabilities. This paper presents a new ISS research experiment to study and evaluate novel technologies in sensors, computers, and intelligent applications for SmallSat-based sensing with autonomous data processing. Configurable and Autonomous Sensor Processing Research (CASPR) is being developed to evaluate autonomous, onboard processing strategies on novel sensors and is set to be installed on the ISS as part of the DoD/NASA Space Test Program – Houston 7 (STP-H7) mission. CASPR features a flight-qualified CSP space computer as the central node and two flight-ready SSP space computers for application execution, both from SHREC; a telescopic, multispectral imager from Satlantis Inc.; an event-driven neuromorphic vision sensor; an AMD GPU subsystem; and Intel Optane phase-change memory. CASPR is a highly versatile ISS experiment meant to explore many facets of autonomous sensor processing in space.

    NASA SpaceCube Next-Generation Artificial-Intelligence Computing for STP-H9-SCENIC on ISS

    Recently, Artificial Intelligence (AI) and Machine Learning (ML) capabilities have seen an exponential increase in interest from academia and industry and can be a disruptive, transformative development for future missions. Specifically, AI/ML concepts for edge computing can be integrated into future missions for autonomous operation, constellation missions, and onboard data analysis. However, using commercial AI software frameworks onboard spacecraft is challenging because traditional radiation-hardened processors and common spacecraft processors cannot provide the onboard processing capability needed to deploy complex AI models effectively. Advantageously, embedded AI microchips being developed for the mobile market demonstrate remarkable capability and follow size, weight, and power constraints similar to those imposed on a space-based system. Unfortunately, many of these devices have not been qualified for use in space. Therefore, Space Test Program - Houston 9 - SpaceCube Edge-Node Intelligent Collaboration (STP-H9-SCENIC) will demonstrate cutting-edge AI applications in flight on multiple space-based devices for next-generation onboard intelligence. SCENIC will characterize several embedded AI devices in a relevant space environment and will provide NASA and DoD with flight-heritage data and lessons learned for developers seeking to enable AI/ML on future missions. Finally, SCENIC also includes new CubeSat form-factor GPS and SDR cards for guidance and navigation.

    Case report: Neuronal intranuclear inclusion disease presenting with acute encephalopathy

    Neuronal intranuclear inclusion disease (NIID), a neurodegenerative disease previously thought to be rare, is increasingly recognized despite heterogeneous clinical presentations. NIID is pathologically characterized by ubiquitin- and p62-positive intranuclear eosinophilic inclusions that affect multiple organ systems, including the brain, skin, and other tissues. Although the diagnosis of NIID is challenging due to phenotypic heterogeneity, a greater understanding of the clinical and imaging presentations can improve accurate and early diagnosis. Here, we present three cases of pathologically proven adult-onset NIID, all presenting with episodes of acute encephalopathy, with protracted workups and lengthy intervals between symptom onset and diagnosis. Case 1 highlights the challenges of diagnosing NIID when MRI does not reveal classic abnormalities and provides a striking example of hyperperfusion in the setting of acute encephalopathy, as well as unique pathology with neuronal central chromatolysis, which has not been previously described. Case 2 highlights the progression of MRI findings associated with multiple NIID-related encephalopathic episodes over an extended time period, as well as the utility of skin biopsy for antemortem diagnosis.

    Modeling the Image Reconstruction Problem in MPI

    In this work, we develop a new operator-based approach for MPI image reconstruction that can directly map arbitrary time-domain data into images. The method enables efficient, rapid, and accurate image reconstruction using a diversity of transmit and receive coils while making minimal assumptions regarding the underlying physics. The system model maps the underlying image, x, to the acquired time-domain signal, b, through a sequence of four linear matrix operators given by A = GVEH. We then solve for x in the system model b = Ax + n, where n is a noise term, using standard approaches such as regularized least squares. The operator H is derived from the Langevin model and is a combination of convolutional operators; E is a row-selector matrix that selects the field-free point (FFP) at each measured time point; V is related to the FFP velocity at each time point; and G incorporates the system filter of the transmit signal. Each of these operators can be implemented efficiently as matrix-vector products, which makes the inverse problem computationally tractable. The method also enables simultaneous application of physical information such as non-negativity constraints and smoothness. Initial results show that our approach effectively and efficiently reconstructs 2D images from simulated and experimental data for multiple transmit and receive coils. This approach provides a pathway for significantly reducing the computational pipeline of future MPI systems, and it will allow for greater flexibility in MPI scanner design. The current model does not include any nanoparticle relaxation effects and assumes a homogeneous B-field; incorporating these components is planned for future work.
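    Because A is a composition of structured operators, the model can be prototyped matrix-free. A sketch of that idea, assuming each of G, V, E, and H is available as a matrix or SciPy LinearOperator; the damped LSQR solve stands in for whichever regularized least-squares solver is actually used:

```python
# Matrix-free composition of A = G V E H and a Tikhonov-damped
# least-squares solve (operator contents are placeholders).
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def forward_operator(G, V, E, H):
    m, n = G.shape[0], H.shape[1]
    return LinearOperator(
        (m, n),
        matvec=lambda x: G @ (V @ (E @ (H @ x))),           # x -> b
        rmatvec=lambda y: H.T @ (E.T @ (V.T @ (G.T @ y))),  # adjoint
    )

def reconstruct(G, V, E, H, b, lam=1e-3):
    A = forward_operator(G, V, E, H)
    x = lsqr(A, b, damp=lam)[0]  # min ||Ax - b||^2 + lam^2 ||x||^2
    return np.maximum(x, 0.0)    # crude non-negativity projection; a
                                 # proximal solver would enforce it exactly
```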

    Projection Reconstruction Magnetic Particle Imaging


    Data from: A convex formulation for magnetic particle imaging x-space reconstruction

    Magnetic Particle Imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell-therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches that could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications.
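    A toy 1D analogue conveys the structure of the convex problem: each partial field of view (pFOV) loses its DC value to feedthrough filtering, and the lost offsets are recovered jointly under smoothness and non-negativity priors. This is a hedged stand-in for the paper's full 3D formulation, assuming CVXPY and a non-overlapping pFOV tiling:

```python
# Toy 1D DC-offset recovery under smoothness and non-negativity priors
# (illustrative analogue, not the paper's 3D formulation).
import cvxpy as cp

def recover_dc_offsets(pfov_segments):
    """pfov_segments: list of 1D arrays, each missing an unknown offset."""
    c = cp.Variable(len(pfov_segments))          # lost zero-frequency values
    image = cp.hstack([seg + c[i] for i, seg in enumerate(pfov_segments)])
    smoothness = cp.sum_squares(cp.diff(image))  # discourages banding/haze
    # Tiny penalty on the offsets pins the global-shift ambiguity.
    problem = cp.Problem(cp.Minimize(smoothness + 1e-6 * cp.sum_squares(c)),
                         [image >= 0])
    problem.solve()
    return [seg + ci for seg, ci in zip(pfov_segments, c.value)]
```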